Windows Server 2008: Defining Capacity Analysis

Much of capacity analysis involves minimizing unknown or immeasurable variables, such as the number of gigabytes or terabytes of storage the system will need in the next few months or years, so that a system can be sized adequately. The high number of unknown variables exists largely because network environments, business policies, and people are constantly changing. As a result, capacity analysis is as much an art as it is a science, drawing heavily on experience and insight.

If you’ve ever had to specify configuration requirements for a new server, or to estimate whether your configuration will have enough power to sustain various workloads now and in the foreseeable future, proper capacity analysis can help with the design and configuration. These capacity-analysis processes help weed out the unknowns and help you make decisions as accurately as possible by giving you a greater understanding of your Windows Server 2008 R2 environment. This knowledge and understanding can then be used to reduce the time and costs associated with supporting and designing an infrastructure. The result is that you gain more control over the environment, reduce maintenance and support costs, minimize firefighting, and make more efficient use of your time.

Businesses depend on network systems for a variety of operations, such as performing transactions or providing security, so that the business functions as efficiently as possible. Systems that are underutilized are probably wasting money and are of little value. On the other hand, systems that are overworked or can’t handle workloads prevent the business from completing tasks or transactions in a timely manner, might cause a loss of opportunity, or keep users from being productive. Either way, these systems are of little benefit to the business. To keep network systems well tuned for the given workloads, capacity analysis seeks a balance between the resources available and the workload required of them. That balance provides just the right amount of computing power for given and anticipated workloads.

This concept of balancing resources extends beyond the technical details of server configuration to include issues such as gauging the number of administrators that might be needed to maintain various systems in your environment. Questions like these are central to capacity analysis, and the answers aren’t readily known because they can’t be predicted with complete accuracy.

To lessen the burden and dispel some of the mysteries of estimating resource requirements, capacity analysis provides the processes to guide you. These processes include vendor guidelines, industry benchmarks, analysis of present system resource utilization, and more. Through these processes, you’ll gain as much understanding as possible of the network environment and step away from the compartmentalized or limited understanding of the systems. In turn, you’ll also gain more control over the systems and increase your chances of successfully maintaining the reliability, serviceability, and availability of your system.

There is no set or formal way to start your capacity-analysis processes. However, a proven and effective means to begin to proactively manage your system is to first establish systemwide policies and procedures. Policies and procedures, discussed shortly, help shape service levels and users’ expectations. After these policies and procedures are classified and defined, you can more easily start characterizing system workloads, which will help gauge acceptable baseline performance values.

The Benefits of Capacity Analysis and Performance Optimization

The benefits of capacity analysis and performance optimization are far-reaching. Capacity analysis helps define and gauge overall system health by establishing baseline performance values, and the analysis then provides valuable insight into where the system is heading. Continuous performance monitoring and optimization ensure that systems are stable and perform well, reducing support calls from end users, which, in turn, reduces costs to the organization and helps employees be more productive. Capacity analysis can be used to uncover both current and potential bottlenecks and can also reveal how change management activities can affect performance today and tomorrow.

Another benefit of capacity analysis is that it can be applied to small environments and can scale well into enterprise-level systems. The level of effort needed to initially drive the capacity-analysis processes will vary depending on the size of your environment, its geography, and its political divisions. With a little up-front effort, you’ll save time and expense and gain a wealth of knowledge and control over the network environment.

Establishing Policy and Metric Baselines

As mentioned earlier, it is recommended that you first define policies and procedures regarding service levels and objectives. Because each environment varies in design, you can’t create cookie-cutter policies; you need to tailor them to your particular business practices and to the environment. In addition, you should strive to define policies that set user expectations and, more important, help winnow out empirical data.

Essentially, policies and procedures define how the system is supposed to be used, establishing guidelines to help users understand that the system can’t be used in any way they see fit. Many benefits are derived from these policies and procedures. For example, in an environment where policies and procedures are working successfully and network performance becomes sluggish, it would be safe to assume that groups of people weren’t playing a multiuser network game, that several individuals weren’t sending enormous email attachments to everyone in the Global Address List, and that a rogue web or FTP server hadn’t been placed on the network.

The network environment is shaped more by the business than by the IT department. Therefore, it’s equally important to gain an understanding of users’ expectations and requirements through interviews, questionnaires, surveys, and more. Some examples of end-user policies and procedures that you can implement in your environment include the following:

  • Email message size, including attachments, can’t exceed 10MB.

  • SQL Server database settings will be enforced with Policy-Based Management.

  • Beta software, freeware, and shareware can be installed only on test equipment (that is, not on client machines or servers in the production environment).

  • The software allowed to run on a user’s PC is specified through centrally managed but flexible group policies.

  • All computing resources are for business use only (in other words, no gaming or personal use of computers is allowed).

  • Only business-related and approved applications will be supported and allowed on the network.

  • All home directories will be limited in size (for example, to 500MB per user).

  • Users must either fill out the technical support Outlook form or request assistance through the advertised help desk phone number.

Policies and procedures, however, aren’t just for end users. They can also be established and applied to IT personnel. In this scenario, policies and procedures can serve as guidelines for technical issues, rules of engagement, or an internal set of rules to abide by. The following list provides some examples of policies and procedures that might be applied to the IT department:

  • System backups must include System State data and should be completed by 5:00 a.m. each workday, and restores should be tested frequently for accuracy and disaster preparedness.

  • Routine system maintenance should be performed only outside of normal business hours, for example, weekdays between 8:00 p.m. and 12:00 a.m. or on weekends.

  • Basic technical support requests should be attended to within two business days.

  • Priority technical support requests should be attended to within four hours of the request.

  • Any planned downtime for servers should follow a change-control process and must be approved by the IT director at least one week in advance with a five-day lead time provided to those impacted by the change.

Benchmark Baselines

If you’ve begun defining policies and procedures, you’re already cutting down the number of immeasurable variables and amount of empirical data that challenge your decision-making process. The next step to prepare for capacity analysis is to begin gathering baseline performance values. The Microsoft Baseline Security Analyzer (MBSA) is an example of a tool that performs a security compliance scan against a predefined baseline.

Baselines give you a starting point against which to compare results. For the most part, determining baseline performance levels involves working with hard numbers that represent the health of a system. On the other hand, several factors shape those statistical representations, such as workload characterization, vendor requirements or recommendations, industry-recognized benchmarks, and the data that you collect.
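
To make the idea of hard numbers concrete, here is a minimal Python sketch that summarizes a set of collected counter samples into simple baseline statistics. The counter name and sample values are hypothetical placeholders; in practice, the samples would come from your own monitoring logs (for example, Performance Monitor counter logs).

import statistics

def summarize_baseline(counter_name, samples):
    """Summarize collected samples into simple baseline statistics."""
    ordered = sorted(samples)
    p95_index = int(0.95 * (len(ordered) - 1))
    return {
        "counter": counter_name,
        "samples": len(samples),
        "average": round(statistics.mean(samples), 1),
        "peak": max(samples),
        "95th_percentile": ordered[p95_index],
    }

# Hypothetical % Processor Time samples gathered over a monitoring window
cpu_samples = [12.5, 18.0, 22.4, 35.1, 41.9, 27.3, 19.8, 55.2, 30.6, 24.7]
print(summarize_baseline(r"Processor(_Total)\% Processor Time", cpu_samples))

Numbers like the average and the 95th percentile become the baseline you later compare against when judging whether a system has drifted from its normal behavior.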

Workload Characterization

Workloads are defined by how processes or tasks are grouped, the resources they require, and the type of work being performed. Examples of how workloads can be characterized include departmental functions, time of day, the type of processing required (such as batch or real-time), companywide functions (such as payroll), volume of work, and much more.

It is unlikely that each system in your environment is a separate entity that has its own workload characterization. Most, if not all, network environments have systems that depend on other systems or are even intertwined among different workloads. This makes workload characterization difficult at best.

So, why is workload characterization so important? Identifying systems’ workloads allows you to determine the appropriate resource requirements for each of them. This way, you can properly plan the resources according to the performance levels the workloads expect and demand.
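
To illustrate the idea, here is a minimal Python sketch that groups a handful of observed tasks into workloads and totals the resources each workload demands. The task names, workload labels, and resource figures are hypothetical examples, not measured values.

from collections import defaultdict

# Observed tasks: (task name, workload type, CPU %, memory in MB, disk IOPS)
tasks = [
    ("Payroll batch run", "batch",     18.0, 512, 120),
    ("Order entry",       "real-time", 22.0, 768, 300),
    ("Nightly reports",   "batch",      9.5, 256,  80),
    ("Email delivery",    "real-time", 14.0, 640, 150),
]

# Total the resource demand per workload type
requirements = defaultdict(lambda: {"cpu_percent": 0.0, "memory_mb": 0, "disk_iops": 0})
for name, workload, cpu, memory, iops in tasks:
    requirements[workload]["cpu_percent"] += cpu
    requirements[workload]["memory_mb"] += memory
    requirements[workload]["disk_iops"] += iops

for workload, totals in requirements.items():
    print(workload, totals)

Even a rough aggregation like this shows which workload types drive demand for each resource, which is the information you need when sizing systems for them.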

Benchmarks

Benchmarks are a means to measure the performance of a variety of products, including operating systems, virtually all computer components, and even entire systems. Many companies rely on benchmarks to gain a competitive advantage because so many IT professionals use them to help determine what’s appropriate for their network environment.

As you would suspect, Sales and Marketing departments all too often exploit benchmark results to sway IT professionals toward their products. For this reason, it’s important to investigate the benchmark results and the companies or organizations that produced them. Vendors, for the most part, are honest about their results, but it’s always a good idea to check with other sources, especially if the results are suspicious. For example, if a vendor has supplied benchmarks for a particular product, check that those benchmarks are consistent with benchmarks produced by third-party organizations (such as magazines, benchmark organizations, and in-house testing labs). If none are available, try to gain insight from other IT professionals or run benchmarks on the product yourself before implementing it in production.

Although some suspicion of benchmarks might arise because of these sales and marketing techniques, the real purpose of benchmarks is to point out the performance levels that you can expect when using the product. Benchmarks can be extremely beneficial for decision making, but they shouldn’t be your sole source for evaluating and measuring performance. When consulting benchmark results during capacity analysis, use them only as a guideline or starting point, and pay close attention to how they are interpreted.
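
As a simple illustration of that kind of cross-check, the following Python sketch compares a vendor-supplied benchmark score against independently published results for the same test and flags a large deviation. The scores and the 15% tolerance are hypothetical values chosen for the example.

def looks_optimistic(vendor_score, independent_scores, tolerance=0.15):
    """Return True if the vendor score exceeds the independent average
    by more than the given tolerance (15% by default)."""
    average = sum(independent_scores) / len(independent_scores)
    return (vendor_score - average) / average > tolerance

vendor_result = 1250                      # transactions/sec claimed by the vendor
independent_results = [1010, 1080, 1045]  # results from third-party labs
if looks_optimistic(vendor_result, independent_results):
    print("Vendor benchmark looks optimistic; verify before relying on it.")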

Table 1 lists companies or organizations that provide benchmark statistics and benchmark-related information, and some also offer tools for evaluating product performance.

Table 1. Organizations That Provide Benchmarks
Company/Organization Name                      Web Address
The Tolly Group                                www.tollygroup.com
Transaction Processing Performance Council     www.tpc.org/
Lionbridge (VeriTest)                          www.etestinglabs.com/
Computer Measurement Group                     www.cmg.org/

